

Inside the Star-Studded, Mob-Run Poker Games That Allegedly Steal Millions From Players

WIRED

NBA stars, mobsters, and marks with fat wallets are all part of an alleged ring of rigged poker games. Here's how these games are assembled, who attends, and how the purported cheating happens. Former NBA player and current Portland Trail Blazers head coach Chauncey Billups (center) exits the Mark O. Hatfield United States Courthouse after his arraignment on October 23, 2025, in Portland, Oregon. To the uninitiated, the arrests of Chauncey Billups and Damon Jones last week over allegations of involvement in rigged illegal poker games may have seemed like an unusual collision of worlds. How could prosecutors claim that former NBA players (one a current coach), professional gamblers, and even mafia members all ended up rubbing elbows in the same high-tech cheating scheme, one that allegedly began in 2019 and ran for several years?


How Hacked Card Shufflers Allegedly Enabled a Mob-Fueled Poker Scam That Rocked the NBA

WIRED

WIRED recently demonstrated how to cheat at poker by hacking the Deckmate 2 card shufflers used in casinos. The mob was allegedly using the same trick to fleece victims for millions. Security researcher Joseph Tartaro demonstrates how he can insert a hacking device into a USB port on the back of the shuffler that alters its code, then transmits the deck's order via Bluetooth to a phone app. The Deckmate 2 automatic card shufflers used in casinos, cardhouses, and high-end private poker games around the world are designed to shuffle a deck in seconds with perfect, computer-generated randomness, vastly speeding up play. They're also, amazingly, sold with a camera inside that can observe every card in the deck before it's dealt--a fact that's become very convenient for poker-cheating hackers and, allegedly, members of the Cosa Nostra mafia.


Instruction-Driven Game Engine: A Poker Case Study

Wu, Hongqiu, Liu, Xingyuan, Wang, Yan, Zhao, Hai

arXiv.org Artificial Intelligence

The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game descriptions and generate game-play processes. The IDGE allows users to create games simply with natural language instructions, which significantly lowers the barrier for game development. We approach the learning process for IDGEs as a Next State Prediction task, wherein the model autoregressively predicts the game states given player actions. The computation of game states must be precise; otherwise, slight errors could corrupt the game-play experience. This is challenging because of the gap between stability and diversity. To address this, we train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios. Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs. This work lays the groundwork for future advancements in transforming how games are created and played.
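The Next State Prediction framing in the abstract can be illustrated with a minimal sketch. Here a hand-written transition function stands in for the LLM that autoregressively predicts the next game state; all state fields and action names (`pot`, `stacks`, `to_act`, `fold`, `bet`) are illustrative assumptions, not the paper's actual schema.

```python
from copy import deepcopy

def next_state(state, action):
    """Return the next poker game state after one player action."""
    state = deepcopy(state)  # never mutate the caller's state
    player = state["to_act"]
    if action["type"] == "fold":
        state["active"] = [p for p in state["active"] if p != player]
    elif action["type"] == "bet":
        state["pot"] += action["amount"]
        state["stacks"][player] -= action["amount"]
    # rotate the turn to the next still-active player
    order = state["active"]
    if not order:
        state["to_act"] = None
    elif player in order:
        state["to_act"] = order[(order.index(player) + 1) % len(order)]
    else:
        state["to_act"] = order[0]
    return state

s0 = {"pot": 3, "stacks": {"A": 100, "B": 100}, "active": ["A", "B"], "to_act": "A"}
s1 = next_state(s0, {"type": "bet", "amount": 10})
```

The point the abstract stresses is visible even here: a single wrong field (a mis-updated pot or turn order) corrupts every subsequent state, which is why the authors treat exact state computation as the hard part of the task.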


PokerGPT: An End-to-End Lightweight Solver for Multi-Player Texas Hold'em via Large Language Model

Huang, Chenghao, Cao, Yanbo, Wen, Yinlong, Zhou, Tao, Zhang, Yanru

arXiv.org Artificial Intelligence

Texas Hold'em, the best-known variant of poker, has long been a typical research target within imperfect information games (IIGs). IIGs have long served as a measure of artificial intelligence (AI) development. Representative prior works, such as DeepStack and Libratus, rely heavily on counterfactual regret minimization (CFR) to tackle heads-up no-limit poker. However, it is challenging for subsequent researchers to learn CFR from previous models and apply it to other real-world applications due to the expensive computational cost of CFR iterations. Additionally, CFR is difficult to apply to multi-player games due to the exponential growth of the game tree size. In this work, we introduce PokerGPT, an end-to-end solver for playing Texas Hold'em with an arbitrary number of players and gaining high win rates, established on a lightweight large language model (LLM). PokerGPT only requires simple textual information about poker games to generate decision-making advice, thus guaranteeing convenient interaction between AI and humans. We mainly transform a set of textual records acquired from real games into prompts, and use them to fine-tune a lightweight pre-trained LLM using reinforcement learning from human feedback (RLHF). To improve fine-tuning performance, we conduct prompt engineering on raw data, including filtering useful information, selecting behaviors of players with high win rates, and further processing them into textual instructions using multiple prompt engineering techniques. Through experiments, we demonstrate that PokerGPT outperforms previous approaches in terms of win rate, model size, training time, and response speed, indicating the great potential of LLMs in solving IIGs.
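The data-to-prompt step the abstract describes (filtering for high-win-rate players and rendering hand records as textual instructions) might look roughly like the sketch below. The field names, the win-rate threshold, and the prompt template are all assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: turn raw hand records into instruction/response pairs
# for fine-tuning, keeping only behaviors of players with high win rates.

def build_prompts(records, min_win_rate=0.6):
    prompts = []
    for rec in records:
        if rec["win_rate"] < min_win_rate:
            continue  # filter out low-win-rate players' behaviors
        prompt = (
            f"You are playing {rec['players']}-player Texas Hold'em. "
            f"Your hand: {rec['hand']}. Board: {rec['board'] or 'none'}. "
            f"Pot: {rec['pot']}. What is your action?"
        )
        prompts.append({"instruction": prompt, "response": rec["action"]})
    return prompts

records = [
    {"players": 6, "hand": "Ah Kh", "board": "", "pot": 15,
     "action": "raise", "win_rate": 0.72},
    {"players": 6, "hand": "7c 2d", "board": "", "pot": 15,
     "action": "fold", "win_rate": 0.31},
]
data = build_prompts(records)
```

Only the first record survives the filter, so the fine-tuning set is biased toward decisions made by winning players, which is the selection effect the abstract relies on.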


Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis

Gupta, Akshat

arXiv.org Artificial Intelligence

Since the introduction of ChatGPT and GPT-4, these models have been tested across a large number of tasks. Their adeptness across domains is evident, but their aptitude for playing games, and for poker specifically, has remained unexplored. Poker is a game that requires decision making under uncertainty and incomplete information. In this paper, we put ChatGPT and GPT-4 through the poker test and evaluate their poker skills. Our findings reveal that while both models display an advanced understanding of poker, encompassing concepts like the valuation of starting hands, playing positions, and other intricacies of game theory optimal (GTO) poker, both ChatGPT and GPT-4 are NOT game theory optimal poker players. Profitable strategies in poker are evaluated in expectation over large samples. Through a series of experiments, we first discover the characteristics of optimal prompts and model parameters for playing poker with these models. Our observations then unveil the distinct playing personas of the two models. We first conclude that GPT-4 is a more advanced poker player than ChatGPT. This exploration then sheds light on the divergent poker tactics of the two models: ChatGPT's conservativeness juxtaposed against GPT-4's aggression. In poker vernacular, when tasked to play GTO poker, ChatGPT plays like a nit, which means that it has a propensity to only engage with premium hands and folds a majority of hands. When subjected to the same directive, GPT-4 plays like a maniac, showcasing a loose and aggressive style of play. Both strategies, although relatively advanced, are not game theory optimal.
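The "nit" and "maniac" labels in the abstract can be made concrete with standard poker heuristics: VPIP (the fraction of hands a player voluntarily plays) and aggression (the share of played hands that are raises). The cutoff values below are conventional rules of thumb assumed for illustration, not thresholds from the paper.

```python
# Classify a pre-flop playing style from a log of 'fold'/'call'/'raise'
# actions using VPIP and aggression frequency (illustrative thresholds).

def playing_style(actions):
    played = [a for a in actions if a != "fold"]
    vpip = len(played) / len(actions)          # how loose the player is
    aggression = played.count("raise") / max(len(played), 1)
    if vpip < 0.15:
        return "nit"                           # folds almost everything
    if vpip > 0.40 and aggression > 0.5:
        return "maniac"                        # loose and aggressive
    return "balanced"

# A ChatGPT-like log (mostly folds) vs. a GPT-4-like log (loose, raise-heavy)
nit_log = ["fold"] * 18 + ["raise", "raise"]
maniac_log = ["raise"] * 9 + ["fold"] * 8 + ["call"] * 3
```

Under these definitions the first log classifies as "nit" (VPIP 0.10) and the second as "maniac" (VPIP 0.60, aggression 0.75), mirroring the two personas the paper reports.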


RLCFR: Minimize Counterfactual Regret by Deep Reinforcement Learning

Li, Huale, Wang, Xuan, Jia, Fengwei, Li, Yifan, Wu, Yulin, Zhang, Jiajia, Qi, Shuhan

arXiv.org Machine Learning

Counterfactual regret minimization (CFR) is a popular method for decision-making problems in two-player zero-sum games with imperfect information. Unlike existing studies, which mostly focus on solving larger-scale problems or accelerating solution efficiency, we propose a framework, RLCFR, which aims at improving the generalization ability of the CFR method. In RLCFR, the game strategy is solved by CFR within a reinforcement learning framework, and the dynamic procedure of iterative strategy updating is modeled as a Markov decision process (MDP). Our method, RLCFR, then learns a policy to select the appropriate way of updating regrets in the process of iteration. In addition, a stepwise reward function is formulated to learn the action policy, proportional to how good the iteration's strategy is at each step. Extensive experimental results on various games show that the generalization ability of our method is significantly improved compared with existing state-of-the-art methods.
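At the heart of every CFR variant, including the ones RLCFR learns to choose between, is the regret-matching rule: positive cumulative regrets are normalized into a strategy, with a uniform fallback when no regret is positive. This is the textbook rule (Hart and Mas-Colell), shown here as a sketch rather than the paper's RL-driven update.

```python
# Regret matching: convert cumulative regrets over actions into a strategy.

def regret_matching(cumulative_regret):
    positive = [max(r, 0.0) for r in cumulative_regret]
    total = sum(positive)
    n = len(cumulative_regret)
    if total <= 0.0:
        return [1.0 / n] * n  # no positive regret: play uniformly at random
    return [p / total for p in positive]

# Regrets of 4, -2, and 1 for three actions yield probabilities 0.8, 0, 0.2
strategy = regret_matching([4.0, -2.0, 1.0])
```

CFR iterates this rule at every information set and averages the strategies over iterations; RLCFR's contribution is learning, as an MDP policy, which regret-update variant to apply at each step rather than fixing one in advance.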


Texas A&M and Simon Fraser Universities Open-Source RL Toolkit for Card Games

#artificialintelligence

In July the poker-playing bot Pluribus beat top professionals in a six-player no-limit Texas Hold'em poker game. Pluribus taught itself from scratch using a form of reinforcement learning (RL) to become the first AI program to defeat elite humans in a poker game with more than two players. Compared to perfect information games such as chess or Go, poker presents a number of unique challenges with its concealed cards, bluffing, and other human strategies. Now a team of researchers from Texas A&M University and Canada's Simon Fraser University have open-sourced a toolkit called "RLCard" for applying RL research to card games. While RL has already produced a number of breakthroughs in goal-oriented tasks and has high potential, it's not without its drawbacks.
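Card-game RL toolkits like RLCard typically expose a reset/step environment interface with legal actions and a terminal payoff. The self-contained toy below sketches that interface with a trivial "high card" game; it is an illustrative stand-in for the pattern, not RLCard's actual API.

```python
import random

class HighCardEnv:
    """Toy two-player game: each draws a card; higher card wins the pot."""

    def reset(self, seed=None):
        rng = random.Random(seed)
        self.cards = rng.sample(range(2, 15), 2)  # distinct ranks 2..14 (ace)
        return {"my_card": self.cards[0], "legal_actions": ["bet", "fold"]}

    def step(self, action):
        if action == "fold":
            return None, -1, True                 # forfeit the ante
        reward = 1 if self.cards[0] > self.cards[1] else -1
        return None, reward, True                 # showdown payoff, game over

# The standard agent loop such toolkits are built around
env = HighCardEnv()
obs = env.reset(seed=0)
action = random.Random(0).choice(obs["legal_actions"])
_, reward, done = env.step(action)
```

Real card games add hidden state, multiple betting rounds, and more players, but the loop stays the same, which is what lets a toolkit swap Blackjack, Leduc Hold'em, or no-limit Texas Hold'em behind one interface.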


How Molly Bloom went from 'poker princess' to the 'movie heroine' of 'Molly's Game'

Los Angeles Times

A hotel manager was circling the Polo Lounge, surveying the stately dining room, when he suddenly did a double take. "I thought I saw you come in," he said. "Mind if I sit -- just briefly?" Stephen Boggs, the director of guest relations at the Beverly Hills Hotel, slid into the booth where Bloom was having breakfast. The two had met in the early 2000s, when she began hosting underground poker games for the entertainment industry elite in the hotel's private bungalows. She'd returned to the venue last month to talk about a new Aaron Sorkin movie based on her life, "Molly's Game," which follows her journey into the secretive world of high-stakes poker -- one that ultimately led to her arrest by the FBI in 2013. Bloom says that the celebrities who frequented her games -- Leonardo DiCaprio, Tobey Maguire, Todd Phillips -- have never reached out to her following her brush with the feds. But Boggs, at least, seemed ready and willing to welcome her back into the Hollywood fray. After chatting with her for a few moments, he offered her his business card and urged her to follow up with him. "It's so great to see you," he gushed. "You look terrific, and congratulations on everything."


University of Alberta poker bot Deepstack defeats Texas Hold 'em pros - Cantech Letter

#artificialintelligence

The University of Alberta's Computer Poker Research Group created DeepStack, an artificial intelligence program that defeated professional human poker players at heads-up, no-limit Texas hold'em. Apart from being the first win of its kind, it bears significance for applications ranging from making better medical treatment recommendations to developing improved strategic defense planning, according to "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker," published in Science. DeepStack brings together approaches from games of perfect information, where both players see what is on the board, and imperfect information, reasoning as it plays and using intuition gained through learning to reassess its strategy with every decision. Computing scientist Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study, said poker has presented an ongoing challenge to artificial intelligence. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing," explained Bowling.


Poker-playing AI beats pros using 'intuition,' study finds

#artificialintelligence

Computer researchers are betting they can take on the house after designing a new artificial intelligence program that has beat professional poker players. Researchers from University of Alberta, Czech Technical University and Charles University in Prague developed the "DeepStack" program as a way to build artificial intelligence capable of playing a complex kind of poker. Creating an AI program that can win against a human player in a no-limit poker game has long been a goal of researchers due to the complexity of the game. Michael Bowling, a professor in the Department of Computing Science in the University of Alberta, explained that computers have been able to win at "perfect" games such as chess or Go, in which all the information is available to both players, but that "imperfect" games like poker have been much harder to program for. "This game [poker] embodies situations where you find yourself not having all the information you need to make a decision," said Bowling.